Unpacking the science of AI with Yejin Choi | Gates Podcast
My latest podcast is all about learning something new.
What do you do when you can’t solve a problem? I like to talk to smart people who can help me understand the subject better. I call this process “getting unconfused”—and I think it is one of the best ways to learn something new. In my new podcast, I try to get unconfused about some of the things that fascinate me.
YEJIN CHOI: Current AI, the fact that it’s so opaque, and nobody knows what’s going on under the hood - that’s just not healthy.
BILL GATES: I'm lucky to be able to talk to experts – getting people who've got different backgrounds, different perspectives – that stimulates my thinking. I feel privileged that I get access to people who can do that. I call that ‘getting unconfused.’
[music]
Welcome to Unconfuse Me. I’m Bill Gates.
[music]
My guest today is Dr. Yejin Choi. She’s a Computer Science Professor at the University of Washington, Senior Research Manager at the Allen Institute for AI, and recipient of a MacArthur Fellowship. She does amazing work on AI training systems, including looking at natural language and common sense. She gave a great TED Talk this year, entitled, "Why AI is Incredibly Smart, and Shockingly Stupid."
Welcome, Yejin.
YEJIN CHOI: Thank you so much, Bill. I’m so excited to be here.
BILL GATES: Are you surprised at the advances that have come in the last several years?
YEJIN CHOI: Oh, yes, definitely. I didn’t imagine it would become this impressive.
BILL GATES: What’s strange to me, is that we create these models, but we don’t really understand how the knowledge is encoded. To see what’s in there, it’s almost like a black box, although we see the innards, and so understanding why it does so well, or so poorly, we’re still pretty naive.
YEJIN CHOI: One thing I’m really excited about is our lack of understanding of both types of intelligence, artificial and human. It really opens new intellectual problems. There’s something odd about how these large language models, which we often call LLMs, acquire knowledge in such an opaque way. They can perform some tests extremely well, while surprising us with silly mistakes somewhere else.
BILL GATES: It’s been interesting that, even when it makes mistakes, sometimes if you just change the prompt a little bit, then all of a sudden, even that boundary is somewhat fuzzy, as people play around.
YEJIN CHOI: Totally. Quote-unquote "prompt engineering" became a bit of a black art, where some people say that you have to really motivate the transformers the way that you motivate humans. One custom instruction that I found online has you first tell the LLM, “you are brilliant at reasoning, you really think carefully,” and then somehow the performance is better, which is quite fascinating. But I find two very divisive reactions to the different results that you can get from prompt engineering. On one side, there are people who tend to focus primarily on the success case: so long as there is one answer that is correct, it means the transformers, or LLMs, do know the correct answer; it’s your fault that you didn’t ask nicely enough. Then there is the other side, the people who tend to focus a lot more on the failure cases, and conclude that nothing works.
Both are some sort of extremes. The answer may be somewhere in between, but this does reveal surprising aspects of this thing. Why? Why does it make these kinds of mistakes at all?
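The custom-instruction trick Yejin describes is easy to try for yourself. Here is a minimal sketch, assuming the openai Python package (v1 API) and an API key in the environment; the model name, instruction wording, and test question are illustrative, not from the episode:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

# Plain prompt: just ask the question.
plain = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": question}],
)

# "Motivated" prompt: prepend the kind of custom instruction Yejin mentions.
motivated = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are brilliant at reasoning. You really think "
                    "carefully, step by step."},
        {"role": "user", "content": question},
    ],
)

print("plain:    ", plain.choices[0].message.content)
print("motivated:", motivated.choices[0].message.content)
```

Running both variants over a batch of problems and comparing accuracy is the quickest way to see how fuzzy the boundary Bill mentions really is.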
BILL GATES: We saw a dramatic improvement from models the size of GPT-3 going up to the size of ChatGPT-4. I thought of 3 as kind of a funny toy, almost like a random sentence generator that I wrote 30 years ago. It was better than that, but I didn’t see it as that useful. I was shocked that ChatGPT-4, used in the right way, can be pretty powerful. If we go up in scale, say another factor of 10 or 20 above GPT-4, will that be a dramatic improvement, or a very modest improvement? I guess it’s pretty unclear.
YEJIN CHOI: Good question, Bill. I honestly don’t know what to think about it. There’s uncertainty, is what I’m trying to say. I feel there’s a high chance that we’ll be surprised again, by an increase in capabilities. And then we will also be really surprised by some strange failure modes. More and more, I suspect that the evaluation will become harder, because people tend to have a bias towards believing the success case. We do have cognitive biases in the way that we interact with these machines. They are more likely to be adapted to those familiar cases, but then when you really start trusting it, it might betray you with unexpected failures. Interesting time, really.
BILL GATES: One domain where it’s not as good, which is almost counterintuitive, is mathematics. You almost have to laugh that something like a simple Sudoku puzzle is one of the things it can’t figure out, whereas even humans can do that.
YEJIN CHOI: Yes, it’s like reasoning in general, that humans are capable of, that these ChatGPT are not as reliable right now. The reaction to that in the current scientific community, it’s a bit divisive. On one hand, that people might believe that with more scale, the problems will all go away. Then there’s the other camp who tend to believe that, wait a minute, there’s a fundamental limit to it, and there should be better, different ways of doing it that are much more efficient. I tend to believe the latter. Anything that requires a symbolic reasoning can be a little bit brittle. Anything that requires a factual knowledge can be brittle. It’s not a surprise when you actually look at the simple equation that we optimize for training these larger language models because, really, there’s no reason why suddenly such capability should pop out.
BILL GATES: I wonder if the future architecture may have more of a self-understanding of reusing knowledge in a much richer way than just this forward-chaining set of multiplications.
YEJIN CHOI: Yes, right now the transformers, like GPT-4, can look at such a large amount of context. It’s able to remember so many of the words spoken just now. Whereas humans, you and I, we both have a very small working memory. The moment we hear new sentences from each other, we kind of forget exactly what was said earlier, but we remember the abstract of it. We have this amazing capability of abstracting away instantaneously with such a small working memory, whereas right now GPT-4 has enormous working memory, so much bigger than ours. But I think that’s actually the bottleneck, in some sense, hurting the way that it’s learning, because it’s just relying on patterns, a surface overlay of patterns, as opposed to trying to abstract away the true concepts underneath any text.
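The "forward-chaining set of multiplications" Bill refers to is, concretely, the stack of matrix products inside a transformer layer, and the large context Yejin describes is just a long axis in those matrices. A minimal numpy sketch of causal self-attention, with illustrative sizes (8 tokens, width 16) rather than any real model’s dimensions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

T, d = 8, 16                             # context length and embedding width
rng = np.random.default_rng(0)
X = rng.normal(size=(T, d))              # token representations
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv         # three forward matrix multiplications
scores = Q @ K.T / np.sqrt(d)            # how strongly each token attends to each other token
mask = np.triu(np.ones((T, T)), k=1)     # causal mask: a token cannot look ahead
scores = np.where(mask == 1, -1e9, scores)
out = softmax(scores) @ V                # weighted mix of value vectors, one more multiply
print(out.shape)                         # (8, 16): one updated vector per token
```

Every step is a fixed multiply over the surface of the token patterns; nothing in it explicitly compresses the context the way Yejin describes human memory abstracting.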
BILL GATES: One of the areas that the Gates Foundation would love to see is a kind of math tutor. There’s a question: do you need a big, big, big, big model to do that? Because if you make it so big, then our ability to know how it behaves is limited; it’s hard to test. We’re hoping that one of the more medium-size models, one that mostly learns math textbooks and won’t have such a broad knowledge of the world, will let us do quality assurance.
YEJIN CHOI: In academia, there’s actually a lot of such effort going on, but without a lot of compute, including my own work that tries to develop specialized models. Usually, the smaller models cannot win over ChatGPT in all dimensions, but if you have a target task, like math tutoring, I do believe that definitely, not only can you close the gap with larger models, you can actually surpass the larger model’s capability by specializing on it. This is totally doable, and I believe in it.
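A sketch of the specialization Yejin describes: take a modest open model and fine-tune it on task-specific data instead of relying on a much larger general model. This assumes the Hugging Face transformers and torch packages; the base model ("gpt2") and the two toy tutoring examples are stand-ins, since a real math tutor would need a curated corpus and careful evaluation:

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # stand-in for any modest-size open model
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tiny illustrative tutoring dialogues; a real run needs thousands of curated examples.
examples = [
    "Student: Why is 3/6 equal to 1/2? Tutor: Divide the top and bottom by 3.",
    "Student: What is 7 x 8? Tutor: Try 7 x 10 = 70, then subtract 7 x 2 = 14 to get 56.",
]

class TutorData(torch.utils.data.Dataset):
    def __init__(self, texts):
        self.enc = tok(texts, truncation=True, padding="max_length",
                       max_length=64, return_tensors="pt")
    def __len__(self):
        return self.enc["input_ids"].size(0)
    def __getitem__(self, i):
        ids = self.enc["input_ids"][i]
        mask = self.enc["attention_mask"][i]
        labels = ids.clone()
        labels[mask == 0] = -100   # no loss on padding tokens
        return {"input_ids": ids, "attention_mask": mask, "labels": labels}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="math-tutor", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=TutorData(examples),
)
trainer.train()   # the small model now specializes on the tutoring distribution
```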
BILL GATES: Certainly for something like drug discovery, knowing English isn’t necessary. It’s kind of weird; these models are so big that very few people get to probe them or change them in some way. And yet, in the world of Computer Science, the majority of everything that was ever invented was invented in universities. We don’t have this in a form that people can play around with, taking a hundred different approaches. We have to find some way to fix that, to let universities be pushing these things, and looking inside these things.
YEJIN CHOI: I couldn’t agree with you more. It cannot be very healthy to see this concentration of powers so that the major AI is only held by a few tech companies, and nobody knows what’s going on under the hood. That’s just not healthy. Especially when it is extremely likely that there is a moderate size solution that is open, that people can investigate and better understand and even better control, actually. Because if you open it, it’s so much easier for you to adapt it into your custom use cases, compared to the current way of using GPT-4, which all that you can do is sort of a prompt engineering, and then hope that it understood what you meant. The math tutoring case seems to be the case, where the language models have seen a lot of educational material already out there online. So, that probably is, indeed, much more around the corner, because it has seen a lot of data. Whereas the drug discovery, now the challenge is for AI to come up with something new that doesn’t exist yet. I suspect that that’s a different type of a challenge for AI, because now it truly needs to reason more in a symbolic manner that is grounded in knowledge, as opposed to, ‘oh, there’s a bunch of the sequences, and let’s predict what comes next and get lucky.’ That’s inspiring for me to think about, the different types of challenges and what it might take in order to push things to the next level.
I think that’s basically the future. I am excited to see a lot more open-source effort, really catching up rapidly right now, the fact that it’s just so opaque. Current learning is unbelievably brute force, which I don’t think is the correct way of doing intelligence. There must be a better solution. And for that, we have to open it. In order to be able to really promote better science around it, we need to open it. We don’t have to open the largest or best one, however, because even if you open it, it’s not like academic people can do anything with it. If GPT-4 is open for me, there’s no compute for me to run all those!
BILL GATES: I think to deal with the complexity and the accuracy you probably want to build these things from scratch.
YEJIN CHOI: I believe, with a bit more effort, something like that could be built. And with that wishful thought, I’m also working toward that sort of system, where we might have a little bit more explainable, descriptive knowledge that we can give to the machine to really, truly learn and memorize. Then when it does make mistakes, being able to control the machine through, ‘Oh, what kind of knowledge are you assuming for that kind of answer?,’ and being able to provide, ‘Oh, you know, your assumption is wrong that way. From here on, learn this knowledge.’ Those kinds of problems unlock really exciting new types of machine learning problems, where you need to be able to unlearn, not just learn, but unlearn the incorrect knowledge, and then be able to revise over that in the way that humans also are able to. Whereas right now, everything, like you said, is a bit too black box. But I do think that with effort, this sort of technology could happen.
BILL GATES: Someday maybe we’ll understand how knowledge is represented in the human brain. It’s one of the great mysteries of how evolution did that. Let’s say we figure out both the software and the real brain, do you think we’ll end up seeing that there are similar algorithms underlying how they work?
YEJIN CHOI: Oh, good question. What do you think?
BILL GATES: I think there are aspects, like visual recognition, where we can see that, as you go up, and you’re trying to go to higher-level representations, that some of the same mistakes that the human visual system makes, weirdly appear in these systems. So that at least suggests that there’s a common way. Evolution was sort of trying out different approaches. So it may be that there’s this one fundamental approach that we see a glimpse of in software that evolution “discovered” and managed to use. It’s the greatest miracle that humans’ reasoning capability is so phenomenal.
YEJIN CHOI: Yes, totally. Evolution somehow figured out the algorithm behind our amazing learning capabilities, but we humans haven’t figured out the AI version of it yet. I suspect that there’s definitely a better algorithm out there that we haven’t discovered. It’s just right now, there’s a bit too much focus on, ‘let’s make things larger,’ and everybody’s trying to do that. Whereas there may be really a better solution, an alternative solution that’s waiting to be found, but there’s just not enough of an attention there. Because people tend to think, ‘Oh, it’s not going to work.’
Let’s go back to Microsoft and the very first personal computer, because when that first came out, it was really super-exciting and amazing. Then every single year, there’s a better computer and smaller computer, a faster computer, and it becomes better and better. Similarly, when we look at phones, rockets, cars – the very first invention is never the optimal solution. There’s always a better solution. I do think that there’s a better solution, it’s just that right now, there’s way too much emphasis on the bigger the better.
BILL GATES: I do think, like in a math tutor case, though, the downside of a mistake can be pretty modest. And I think we are seeing that we should be able, give us two or three years, to create something there, that is pretty profound for engaging learners in a way that’s motivating and at the right level for them. That’ll be a pioneering test, that is not the same as relying on it for dangerous decisions.
YEJIN CHOI: I totally agree.
BILL GATES: Are you worried that things could go too fast, and humans would almost ignore the control and the misuse? The sense of purpose of humans, if we’re sort of dumb compared to the AI, I’m more worried about that now than I was a few years ago.
YEJIN CHOI: Even I get a bit of an uneasy feeling if, hypothetically, AGI suddenly does arrive and it’s all-around better than us. How are we supposed to think about that? Are they going to replace all of us and we just go on vacation all the time? That sounds really boring. Although that thought experiment is quite interesting, even if that doesn’t happen, I worry that AI is impacting human life a lot already. And it will do so even more in the coming years. It seems that, unless we put in the right kinds of effort, trying to understand where the limitations and capabilities are, and then try to develop both the policy and also other AI techniques that can better control this impact on humans - if we don’t put in enough effort, this could be disastrous. If we’re not ready for it, it could be very hard on us. I’m at least optimistic that more and more people worry about this, and there’s a lot more conversation going on, so I hope that’s a sign of people taking more action around it. But yes, it’s a concern.
BILL GATES: I thought that we would get the super-capable kind of blue-collar robots way before this reading and writing thing became at least somewhat possible. The inversion that we don’t know how to pick parts out of a box, but we know how to rewrite the Pledge of Allegiance the way Donald Trump would write it. Those two tasks, the robot task I thought of as much easier, and so it would come first.
YEJIN CHOI: That’s a really sharp observation, Bill, and there’s actually a thing about it, which is Moravec’s paradox, which is that the perceptual tasks that look seemingly easier for humans are actually much harder for AI, compared to say, a chess game, which is harder for us, which is actually easier for AI. In fact, that inversion happens in other ways as well. I’m currently proposing this thought, a generative AI paradox, where it might be that somehow generative capabilities are stronger than the understanding capabilities, which again, may be a little bit inversed version of how humans tend to be able to understand amazing novels, but we find it harder to write. And again, paintings we can appreciate without being able to generate those great paintings. Whereas right now, it looks as if these capabilities are a little bit reversed. Because when you look at DALL-E 2, DALL- E 3, it’s able to generate amazing images, but then there’s no amazing current AI that truly understands the image content in a way that surprises us. They are lagging behind, weirdly enough, so it might be that between generation and understanding capabilities, there’s something interestingly reversed about it.
BILL GATES: But it’s almost a paradox that in the near term, the risk is that we overuse it, like take advice from it, and it would be wrong. In the long run, maybe the fear is that it’s too good. In your talk, you expressed that: because it’s such a different kind of intelligence, it’s both the “smartest” by some definitions, and the “dumbest,” like in medical applications. My foundation would love to have the equivalent of a doctor for poor people who can never get access to that expertise. But how do we test that? How cautious do we need to be when we have a hard time characterizing what we’ve got here?
YEJIN CHOI: Part of me wonders whether that hypothetical, AGI-like capability, if it did exist, and if it’s so good, can it actually answer some of the hardest questions that humanity faces like climate change? Again, some people disagree, what is it doing? And can AI really help answer those kinds of questions in such a satisfactory, such a high quality, reliable manner? If AGI really truly comes, I don’t know. Is it actually going to be good enough for that kind of purpose? That relates to your wish about doctors. We somehow need to create these AI technologies that can benefit humanity better, but are they actually going to be super-reliable? How much of a gap will there be? I think that’s very uncertain right now. We want to believe that it’s around the corner, in some sense, especially those technologies that can be really beneficial for humanity.
BILL GATES: In my twenties, I definitely thought, like for language translation, that there would just be a set of processing steps. This is a noun, this is a verb, and that it would be an explicit piece of logic. When Google found that their logic approach, which was a pretty large team, hundreds, was just beaten by their neural net approach – that was the beginning of this mind-blowing thing. So yes, we are often naive, particularly about what it takes to match human capability.
YEJIN CHOI: I don’t know for sure whether we’re really around the corner, or we are just opening the can of a lot of curious, fundamental questions about intelligence, and it might turn out to be that it’s a lot messier than we expected. It’s a lot harder than we expected. Then building really reliable, trustworthy AI turns out to be harder than we thought. I’m not necessarily saying that that is truth, either. We just don’t know how far or close we are.
BILL GATES: Do you see a problem where the commercial applications of this and the money going into it is a gold rush, even making the Internet gold rush seem modest? Would that possibly drain people out of academia, who are doing the important work, or do you see that happening somewhat?
YEJIN CHOI: Unfortunately, there’s a leak from academia to industry. But actually, there’s a bigger concern for me. Whether they’re in industry or in academia, I do worry that a lot of people feel a bit hopeless, in the sense that there are really strong messages dominating the field, namely that scale is all you need, and GPT-5, 6, 7 will be even more amazing. Maybe there’s nothing one can do about it. A bit too much has currently shifted towards prompt engineering as the main research focus. I genuinely worry about that: everybody doing the same thing, can that be good? I do hope that people explore what happens with the bigger scale out of curiosity, but there’s so much emphasis, and all the major companies now feel like they need to catch up with ChatGPT. I hear from many friends that there’s a lot of this internal refocus, reprioritization, which is totally understandable, but if this is a global phenomenon, that’s not healthy at all. We need to put more research effort into safeguarding AI and building alternative methods that are more compute-efficient, and therefore also have a smaller carbon footprint.
BILL GATES: We need to bring math and maybe even physics people, but certainly math people. I feel lucky that I was a mathematician and then did computer science, because these models are very mathematical. Just being a programmer isn’t really the training you need for this stuff.
YEJIN CHOI: And currently, brute force at scale is the way to go, but there may be an alternative, where sometimes these smaller models, the specialized models, do learn from a lot more specialized data, and the data is actually the key. That data can be not just more data, but better data, high-quality data: sometimes data that was really designed to teach that particular mathematical concept, for example. When you think about humans, too, nobody learns very well just by reading random Web data. We tend to learn better when there’s a great textbook and tutorial. Similarly, I do think that this is about how to transfer knowledge or information in the most efficient way. That’s another reason, for me, why I believe that the smaller model or modest-size model could have a major edge. But that requires innovation in how to get that information.
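Yejin’s point that better data beats more data can be made concrete with even a crude filter over a raw pool of documents. The heuristics and threshold below are purely illustrative assumptions; real pipelines use learned quality classifiers and far richer signals:

```python
def quality_score(doc: str) -> float:
    """Crude proxy for 'designed to teach': long enough, wordy, non-repetitive text."""
    words = doc.split()
    if len(words) < 15:                                    # too short to teach anything
        return 0.0
    alpha = sum(w.isalpha() for w in words) / len(words)   # penalize markup and junk
    uniq = len(set(words)) / len(words)                    # penalize boilerplate repetition
    return alpha * uniq

def curate(pool, keep_ratio=0.5):
    """Keep only the top-scoring slice of a raw document pool."""
    ranked = sorted(pool, key=quality_score, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_ratio))]

raw_pool = [
    "Buy now!!! Click here click here click here win win win",
    "A fraction compares parts to a whole. To add fractions with the same "
    "denominator, add the numerators and keep the denominator unchanged.",
]
print(curate(raw_pool))   # keeps the instructional text, drops the spam
```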
[music]
BILL GATES: I’ve got a turntable here and I asked you to bring in a record album.
YEJIN CHOI: This music, it’s called “Virtual Insanity.” Very relevant to our current conversation, but I used to listen to this back when I worked for you.
BILL GATES: Oh, wow.
YEJIN CHOI: Yes, here in Redmond. This was before I did a PhD. Before coming here, I was excited to learn about this Microsoft programming language package called the MFC. I don’t know if it rings a bell to you.
BILL GATES: Sure, yes.
YEJIN CHOI: I self-taught that, because it wasn’t really a part of the curriculum, per se. [music – “Virtual Insanity” by Jamiroquai]
YEJIN CHOI: Somehow, I found the development job. I used to listen to this. The genre is like acid jazz, but it’s not really jazz. It’s like a modern variety of it, and I believe they’re from the UK, maybe.
BILL GATES: “Virtual Insanity,” wow.
YEJIN CHOI: Right now, it is virtual insanity. [laughs]
BILL GATES: It’s kind of like jazz and rap. Next thing we know, we’ll have AIs not only making the tunes, but the lyrics as well.
[music fades]
BILL GATES: What are some of the ways you’re most enthused about that AI can help us improve the world?
YEJIN CHOI: My wishful thought is for AI to understand humans better than humans ourselves do. I think that’s fundamentally a reason why there’s a lot of conflict. There’s a lot of disagreement, and I’m hoping that we can use AI as a tool to better reflect on ourselves, and then be able to communicate with each other better, and coexist together more peacefully.
BILL GATES: I completely agree with that. It’s kind of scary, that we seem to be more polarized. Other technologies gave us hydrogen bombs and bioterrorist pathogens. It’s just a dream, because the AI is not there yet, but if it could help us understand each other and maybe reverse this trend towards polarization, that would be an incredible favor to the world. A lot of people worry about AI safety, that it doesn’t take over the world, but at the same time, maybe it can improve and reduce conflict, and improve understanding. That’s worth working on.
YEJIN CHOI: Yes.
BILL GATES: Well, thank you, Yejin, for taking time. It was a fascinating conversation, and it’s going to be interesting to see where it all goes.
YEJIN CHOI: Likewise, thank you so much for having me here. [music]
BILL GATES: Unconfuse Me is a production of The Gates Notes. Special thanks to my guest today, Yejin Choi.
YEJIN CHOI: To be honest, I never imagined giving a TED talk. I just don’t have that kind of personality. But I got my arm twisted to do it, because basically, the recruiting person told me that otherwise, it’s going to be just a lot of tech CEOs, who are also men.
BILL GATES: Ah! [laughs]
YEJIN CHOI: That was motivating enough. She clicked the right button on me.